We explore the use of physics-informed neural networks to drastically accelerate the solution of the ordinary differential-algebraic equations that govern power system dynamics. For transient stability assessment, the traditionally applied methods either carry a significant computational burden, require model simplifications, or use overly conservative surrogate models. Conventional neural networks can circumvent these limitations but face a high demand for high-quality training datasets, while they ignore the underlying governing equations. Physics-informed neural networks are different: they incorporate the power system differential-algebraic equations directly into the neural network training and drastically reduce the need for training data. This paper takes a deep dive into the performance of physics-informed neural networks for power system transient stability assessment. Introducing a new neural network training procedure to facilitate a thorough comparison, we explore how physics-informed neural networks compare with conventional differential-algebraic solvers and classical neural networks in terms of computation time, data requirements, and prediction accuracy. We illustrate the findings on the Kundur two-area system and assess the opportunities and challenges of physics-informed neural networks as a transient stability analysis tool, highlighting possible pathways for further development of this method.
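A minimal sketch of the core idea in this abstract: alongside a data-mismatch loss, the governing dynamics are evaluated at collocation points and their residual is penalized. The one-machine swing equation, parameter values, and the stand-in "network output" below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def physics_residual(delta, t, M=0.1, D=0.05, P_m=0.8, P_e=1.0):
    """Finite-difference residual of an illustrative one-machine swing equation:
    M * delta'' + D * delta' - (P_m - P_e * sin(delta)) = 0."""
    dt = t[1] - t[0]
    d1 = np.gradient(delta, dt)   # d(delta)/dt
    d2 = np.gradient(d1, dt)      # d2(delta)/dt2
    return M * d2 + D * d1 - (P_m - P_e * np.sin(delta))

def pinn_loss(delta_pred, delta_data, t, lam=1.0):
    """Physics-informed loss: data mismatch plus lam-weighted physics residual."""
    data_loss = np.mean((delta_pred - delta_data) ** 2)
    phys_loss = np.mean(physics_residual(delta_pred, t) ** 2)
    return data_loss + lam * phys_loss

t = np.linspace(0.0, 1.0, 101)
delta = 0.1 * np.sin(2 * np.pi * t)   # stand-in for a network's trajectory output
loss = pinn_loss(delta, delta, t)
```

The physics term is what lets training proceed with few (or even no) labeled trajectories: any candidate trajectory that violates the governing equations is penalized regardless of data coverage.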
Multimodal learning pipelines have benefited from the success of pretrained language models. However, this comes at the cost of increased model parameters. In this work, we propose Adapted Multimodal BERT (AMB), a BERT-based architecture for multimodal tasks that uses a combination of adapter modules and intermediate fusion layers. The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations. During the adaptation process the pretrained language model parameters remain frozen, allowing for fast, parameter-efficient training. In our ablations we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise. Our experiments on sentiment analysis with CMU-MOSEI show that AMB outperforms the current state-of-the-art across metrics, with a 3.4% relative reduction in the resulting error and a 2.1% relative improvement in 7-class classification accuracy.
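A hedged sketch of the two ingredients named above, with made-up shapes and plain NumPy in place of a real BERT: a bottleneck adapter applied to frozen text features, and a simple additive layer-wise fusion that injects audio-visual features into the text stream. Only the adapter and fusion weights would be trained; the backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

D, D_AV, BOTTLENECK = 16, 8, 4            # illustrative feature dimensions

W_down = rng.normal(size=(D, BOTTLENECK)) * 0.1   # adapter down-projection
W_up = rng.normal(size=(BOTTLENECK, D)) * 0.1     # adapter up-projection
W_fuse = rng.normal(size=(D_AV, D)) * 0.1         # audio-visual -> text projection

def adapter(h):
    """Bottleneck adapter with a residual connection (only these weights train)."""
    return h + np.maximum(h @ W_down, 0.0) @ W_up

def fuse(h_text, h_av):
    """Additive layer-wise fusion of audio-visual features into text features."""
    return adapter(h_text + h_av @ W_fuse)

h_text = rng.normal(size=(5, D))    # 5 "tokens" of frozen BERT-like features
h_av = rng.normal(size=(5, D_AV))   # temporally aligned audio-visual features
out = fuse(h_text, h_av)
```

The parameter saving comes from the bottleneck: the adapter adds only `D*BOTTLENECK*2` weights per layer instead of retraining the full `D*D`-scale backbone.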
The computationally expensive estimation of engineering demand parameters (EDPs) via finite element (FE) models, while considering both ground-motion and parametric uncertainty, limits the use of the performance-based earthquake engineering framework. Attempts have been made to substitute FE models with surrogate models; however, most of these are a function of the structural parameters only. This necessitates retraining the surrogate for previously unseen earthquakes. In this paper, the authors propose a machine-learning-based surrogate model framework that accounts for both uncertainties in order to predict the response to unseen earthquakes. Accordingly, earthquakes are characterized by their weights on an orthonormal basis, computed using the SVD of a representative ground-motion suite. This enables one to generate large numbers of earthquakes by randomly sampling these weights and multiplying them with the basis. The weights, along with the constitutive parameters, serve as inputs to a machine learning model with the EDPs as the desired output. Four competing machine learning models were tested, and it was observed that a deep neural network (DNN) gave the most accurate predictions. The framework is validated by using it to successfully predict the peak responses of one-story and three-story buildings, represented using stick models, subjected to unseen far-field ground motions.
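The basis construction and sampling step described above can be sketched directly. The ground-motion suite here is synthetic noise purely for illustration; in the paper it would be a suite of recorded accelerograms.

```python
import numpy as np

rng = np.random.default_rng(0)

# SVD of a representative suite: 50 records x 200 time steps (illustrative sizes).
suite = rng.normal(size=(50, 200))
U, S, Vt = np.linalg.svd(suite, full_matrices=False)

k = 10
basis = Vt[:k]                        # leading orthonormal basis vectors

# Generate a new ground motion by sampling weights on the basis,
# scaled here by the singular values so energy matches the suite.
weights = rng.normal(size=k) * S[:k]
new_motion = weights @ basis

# These k weights (plus constitutive parameters) would be the surrogate's input.
```

Because the basis is orthonormal, the weights of any record are recovered by a simple projection (`record @ basis.T`), which is what makes unseen earthquakes representable without retraining.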
Most reinforcement learning (RL) recommendation systems designed for edge computing must either synchronize during recommendation selection or depend on an unprincipled patchwork collection of algorithms. In this work, we build on asynchronous coagent policy gradient algorithms \citep{kostas2020asynchronous} to propose a principled solution to this problem. The class of algorithms that we propose can be distributed over the internet and run asynchronously in real time. When a given edge node fails to respond to a data request with sufficient speed, this is not a problem; the algorithm is designed to function and learn in the edge setting, where network issues are part of that setting. The result is a principled, theoretically grounded RL algorithm designed to be distributed in, and to learn in, this asynchronous environment. In this work, we describe this algorithm and the proposed class of architectures in detail, and demonstrate that they perform well in practice in the asynchronous setting, even as network quality degrades.
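An illustrative toy (not the cited algorithm) of the coagent idea this abstract builds on: each coagent is an independent stochastic policy that applies its own local policy-gradient update whenever its data happens to arrive, with no synchronization barrier between coagents. The bandit-style reward and availability model are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class Coagent:
    """An independent softmax policy that updates from purely local information."""

    def __init__(self, n_actions=2, lr=0.1):
        self.theta = np.zeros(n_actions)
        self.lr = lr

    def act(self):
        p = np.exp(self.theta) / np.exp(self.theta).sum()
        return rng.choice(len(p), p=p), p

    def local_update(self, action, probs, reward):
        # REINFORCE-style gradient on this coagent's own softmax policy only.
        grad = -probs
        grad[action] += 1.0
        self.theta += self.lr * reward * grad

coagents = [Coagent(), Coagent()]
for step in range(200):
    for agent in coagents:
        if rng.random() < 0.7:      # simulated network availability: updates
            a, p = agent.act()      # arrive asynchronously, or not at all
            reward = 1.0 if a == 1 else 0.0   # action 1 is the better one
            agent.local_update(a, p, reward)
```

The point of the sketch is that no step waits on any other coagent: dropped or late data simply means a coagent skips an update, which is the failure mode the abstract says the algorithm is designed to tolerate.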
This paper introduces a framework to capture previously intractable optimization constraints and transform them into a mixed-integer linear program through the use of neural networks. We encode the feasible space of optimization problems characterized by both tractable and intractable constraints, such as differential equations, into a neural network. Leveraging an exact mixed-integer reformulation of neural networks, we solve mixed-integer linear programs that accurately approximate the solution of the originally intractable non-linear optimization problem. We apply our methods to the AC optimal power flow problem (AC-OPF), where the direct inclusion of dynamic security constraints renders the AC-OPF intractable. Our proposed approach scales significantly better than traditional approaches. Demonstrating our methodology on power system operation considering N-1 security and small-signal stability, we show how it can efficiently obtain cost-optimal solutions while satisfying both static and dynamic security constraints.
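The "exact mixed-integer reformulation" of a trained ReLU network is a standard construction worth making concrete. For one unit y = max(0, x) with known bounds L <= x <= U and a binary variable z, the linear constraints below are satisfiable exactly when y = max(0, x); a MILP solver applies this to every unit of the network. The check below verifies the encoding numerically without a solver (bounds are illustrative).

```python
import numpy as np

def relu_bigM_feasible(x, y, z, L=-10.0, U=10.0, tol=1e-9):
    """Check the big-M constraints that exactly encode y = max(0, x):
       y >= x,  y >= 0,  y <= x - L*(1 - z),  y <= U*z,  z binary."""
    return (y >= x - tol and y >= -tol
            and y <= x - L * (1 - z) + tol
            and y <= U * z + tol)

# The true ReLU input/output pairs (with the natural choice of z) are feasible.
checks = []
for x in np.linspace(-5, 5, 21):
    y = max(0.0, x)
    z = 1 if x > 0 else 0
    checks.append(relu_bigM_feasible(x, y, z))
```

Tightness of the bounds L and U matters in practice: the reformulation is exact for any valid bounds, but looser bounds weaken the MILP relaxation and slow the solver.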
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain both optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improved participant benefits, increased efficiency, and reduced experimental costs in adaptive multi-armed experiments with exponential rewards.
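To make the allocation mechanics concrete, here is a toy adaptive-allocation loop for exponential rewards. The paper's actual Gittins-index rule is not reproduced; the `index` function below is a simple mean-plus-exploration-bonus stand-in, so only the overall structure (compute an index per arm, allocate the next participant to the highest index) reflects the design described above.

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = [1.0, 2.0]   # mean of the exponential reward for each arm (illustrative)
counts = np.zeros(2)      # pulls per arm
sums = np.zeros(2)        # accumulated reward per arm

def index(arm, t):
    """Stand-in allocation index; a GI rule would replace this computation."""
    if counts[arm] == 0:
        return np.inf                    # force an initial pull of each arm
    mean = sums[arm] / counts[arm]
    return mean + np.sqrt(2 * np.log(t + 1) / counts[arm])

for t in range(500):
    arm = int(np.argmax([index(a, t) for a in range(2)]))
    reward = rng.exponential(true_means[arm])
    counts[arm] += 1
    sums[arm] += reward
```

In a non-adaptive design the 500 participants would be split evenly regardless of outcomes; an index-based design shifts allocation toward the better arm as evidence accumulates, which is the "earning" advantage the abstract reports.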
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
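A minimal sketch of the two-cloud representation described above, with invented shapes: a primary point cloud that stays fixed, and a reflection point cloud whose positions are displaced per view before both are splatted together. A toy linear function of the view direction stands in for the neural warp field.

```python
import numpy as np

rng = np.random.default_rng(0)

primary = rng.normal(size=(100, 3))     # static points for the rest of the scene
reflection = rng.normal(size=(40, 3))   # points representing the reflection
W = rng.normal(size=(3, 3)) * 0.05      # toy warp parameters (stand-in for a network)

def warp(points, view_dir):
    """Displace reflection points as a function of the viewing direction.
    In the actual method this displacement comes from a learned warp field."""
    return points + (points @ W) * (points @ view_dir)[:, None]

view = np.array([0.0, 0.0, 1.0])
warped = warp(reflection, view)

# Both clouds are combined and splatted by the renderer for this view.
render_points = np.vstack([primary, warped])
```

Keeping the reflection cloud explicit is what enables the manipulations the abstract lists: editing or cloning the reflection amounts to editing or copying that cloud independently of the primary one.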
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the usage of such a robot in a domestic setting is still largely a research problem. This paper discusses the design and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still managed an accuracy of 63.5%. The video emotion detection system produced results almost at par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to the object pose and point permutation, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
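A sketch of an Offset-Attention layer in the style used by point cloud transformers, with illustrative shapes and weights: standard self-attention over the points is computed, and the layer then operates on the offset between the input and the attention feature. Because attention treats the points as a set, the layer is permutation-equivariant, which is the property the abstract appeals to.

```python
import numpy as np

rng = np.random.default_rng(0)

N, D = 32, 16                              # points x feature dim (illustrative)
Wq = rng.normal(size=(D, D)) * 0.1
Wk = rng.normal(size=(D, D)) * 0.1
Wv = rng.normal(size=(D, D)) * 0.1
Wo = rng.normal(size=(D, D)) * 0.1

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(D))   # self-attention over the point set
    feat = attn @ v
    offset = x - feat                      # the "offset" that names the layer
    return x + np.maximum(offset @ Wo, 0.0)   # residual + simple feed-forward stand-in

x = rng.normal(size=(N, D))
y = offset_attention(x)

# Permuting the input points permutes the output rows identically.
perm = rng.permutation(N)
y_perm = offset_attention(x[perm])
```

Permutation equivariance means the completion network's per-point outputs do not depend on the arbitrary ordering of the sensor's point cloud, so the completed geometry is consistent across scans.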
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so that we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
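A toy sketch of the learning signal described above: pairs of behaviors that a person labels "similar" are pulled together in embedding space and "different" pairs are pushed apart, a contrastive-style objective where the similarity labels come from people rather than from augmentation heuristics. The linear embedding and all dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_EMB = 10, 3
W = rng.normal(size=(D_IN, D_EMB)) * 0.1   # toy learnable embedding

def embed(b):
    return b @ W

def similarity_loss(b1, b2, similar, margin=1.0):
    """Contrastive loss on one human-labeled pair of behavior features:
    similar pairs are pulled together, different pairs pushed past a margin."""
    d = np.linalg.norm(embed(b1) - embed(b2))
    return d ** 2 if similar else max(0.0, margin - d) ** 2

b_a = rng.normal(size=D_IN)
b_b = b_a + 0.01 * rng.normal(size=D_IN)   # near-duplicate behavior
b_c = rng.normal(size=D_IN)                # unrelated behavior

loss_sim = similarity_loss(b_a, b_b, similar=True)
loss_diff = similarity_loss(b_a, b_c, similar=False)
```

Minimizing this loss over many queries shapes the embedding so that distances track the features people care about, which is exactly the representation the downstream reward model is then built on.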